5 research outputs found
Engineering uncertain time for its practical integration in ontologies
Ontologies are commonly used as a strategy for knowledge representation. However, they still
present limitations when modeling domains that require broad forms of temporal reasoning. This study is part of
the Onto-mQoL project and was motivated by the real need to extend static ontologies with diverse time
concepts, relations, and properties that go beyond the commonly used Allen's Interval Algebra. Therefore,
we use n-ary relations as the basis for temporal structures, which minimally modify the original ontology,
and extend these structures with a generic set of time concepts (moments and intervals), time concept
properties (precise and uncertain), time relations (interval-interval, interval-moment, and moment-moment),
and time relation properties (qualitative and quantitative). We divided the scientific contribution of this study
into three parts. Firstly, we present the ontological temporal model (classes and properties) and how it is
integrated into static ontologies. Secondly, we discuss the creation of axioms that give the semantics for
precise temporal elements. Finally, as our main contribution, these ideas are extended with axioms for
uncertain time. All these elements follow the Web Ontology Language (OWL) standards, so this proposal
remains compatible with the main ontology editors and reasoners currently available. A case example
demonstrates the use of this approach in the nutrition assessment domain.
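As a hedged illustration of the precise-versus-uncertain distinction described above: an uncertain moment can be modeled as a range of possible time points, and a qualitative interval-interval relation (such as "before") then holds either definitely or only possibly. The following minimal Python sketch assumes this range-based reading; all class names, relation semantics, and values are illustrative and do not come from the Onto-mQoL model or its OWL axioms.

```python
# Sketch: uncertain moments as [lo, hi] ranges of possible time points,
# and a qualitative interval-interval relation evaluated under uncertainty.
# Names and semantics are illustrative assumptions, not the project's model.
from dataclasses import dataclass

@dataclass
class UncertainMoment:
    lo: float  # earliest possible time
    hi: float  # latest possible time

@dataclass
class UncertainInterval:
    begin: UncertainMoment
    end: UncertainMoment

def before(a: UncertainInterval, b: UncertainInterval) -> str:
    """Qualitative 'before' relation between two uncertain intervals."""
    if a.end.hi < b.begin.lo:   # a certainly ends before b can start
        return "definitely"
    if a.end.lo < b.begin.hi:   # some realizations satisfy the relation
        return "possibly"
    return "no"

# Example: two meal intervals with uncertain bounds (hours of the day).
breakfast = UncertainInterval(UncertainMoment(8.0, 8.5), UncertainMoment(9.0, 9.5))
lunch = UncertainInterval(UncertainMoment(12.0, 13.0), UncertainMoment(13.0, 14.0))
print(before(breakfast, lunch))  # -> definitely
```

A precise moment is simply the degenerate case lo == hi, so the same relation covers both precise and uncertain temporal elements.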
Behavioral Data Categorization for Transformers-based Models in Digital Health
Transformers are recent deep learning (DL) models
used to capture dependencies between parts of sequential data.
While their potential was already demonstrated in the natural
language processing (NLP) domain, emerging research shows
transformers can also be an adequate modeling approach to relate
longitudinal multi-featured continuous behavioral data to future
health outcomes. As transformer-based predictions rely on
a domain lexicon, categories, which specialized areas commonly
use to cluster values, are a natural way to compose
such lexica. However, the number of categories may influence the
transformer prediction accuracy, mainly when the categorization
process creates imbalanced datasets, or the search space is too
restricted to generate optimal feasible solutions. This paper
analyzes the relationship between models’ accuracy and the
sparsity of behavioral data categories that compose the lexicon.
This analysis relies on a case example that uses mQoL-
Transformer to model the influence of physical activity behavior
on sleep health. Results show that the number of categories should
be treated as an additional transformer hyperparameter, which can
balance the literature-based categorization and optimization
aspects. Thus, DL processes could also obtain accuracies similar to
those of traditional approaches, such as long short-term
memory, when used to process short behavioral data sequences.
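The categorization step discussed above can be sketched minimally: a continuous behavioral feature (for example, daily step counts) is binned into a fixed number of categories that serve as lexicon tokens, with the bin count exposed as a tunable hyperparameter. The function, token names, and values below are illustrative assumptions, not the paper's actual lexicon or the mQoL-Transformer pipeline.

```python
# Sketch: equal-width binning of a continuous behavioral feature into
# category tokens; n_categories plays the role of the hyperparameter
# discussed in the abstract. All names and values are illustrative.
def categorize(values, n_categories):
    """Map continuous values to equal-width category tokens."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / n_categories or 1.0  # avoid zero width on constant data
    tokens = []
    for v in values:
        idx = min(int((v - lo) / width), n_categories - 1)  # clamp top edge
        tokens.append(f"CAT_{idx}")
    return tokens

# Example: daily step counts tokenized with 4 categories.
steps = [1200, 4500, 8000, 15000, 300]
print(categorize(steps, 4))  # -> ['CAT_0', 'CAT_1', 'CAT_2', 'CAT_3', 'CAT_0']
```

Sweeping n_categories trades off lexicon sparsity against resolution, which is the balance between literature-based categorization and optimization that the abstract describes.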